#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let's stay smart together. (Subscribe to the mailing list to receive future issues.)

The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors' work for "training artificial intelligence to generate text." Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author's manuscript can include; prohibit publishers from using AI narrators for audio books, absent the author's consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.

Tech companies try to assure. AI model developers and deployers are attempting to assure prospective customers that the generative AI tools they're offering will not present substantial business risk. Currently, OpenAI's main business model is to encourage third-party app developers to build software on top of its models, like GPT, using the models as a back-end engine. Unsurprisingly, therefore, Business Insider recently reported that CEO Sam Altman has been meeting with developers privately to tell them OpenAI won't launch competitive apps of its own, aside from the already-everywhere ChatGPT.

At the same time, OpenAI released a set of "safety best practices" for developers looking to integrate with OpenAI models. The tips are worth skimming to understand the myriad limitations still present in models like GPT—they're not plug-and-play. The guidelines include suggestions to always keep a "human in the loop" to review model output; to aggressively test the model through "red team" attacks before deploying; and, in operation, to supply the model with multiple examples of "high quality" output alongside each generation request so as to "help constrain the topic and tone of output text."
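By way of illustration, here is a minimal sketch of that last tip, often called few-shot prompting. It assumes the openai Python package's mid-2023 ChatCompletion interface; the system prompt, the example strings, and the summarize_review helper are our own illustrative inventions, not OpenAI's.

```python
# Minimal few-shot prompting sketch, assuming the openai Python SDK's
# ChatCompletion interface as of mid-2023. Example content is illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code keys in production

# A handful of high-quality example outputs, sent with every request,
# helps constrain the topic and tone of what the model generates.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Summarize: The battery lasts two days but the screen scratches easily."},
    {"role": "assistant", "content": "Strong battery life; fragile screen."},
    {"role": "user", "content": "Summarize: Shipping was slow, though support resolved my issue quickly."},
    {"role": "assistant", "content": "Slow shipping; responsive support."},
]

def summarize_review(review_text: str) -> str:
    """Ask the model for a summary in the same terse style as the examples."""
    messages = (
        [{"role": "system", "content": "You write terse, neutral product-review summaries."}]
        + FEW_SHOT_EXAMPLES
        + [{"role": "user", "content": f"Summarize: {review_text}"}]
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0.2,  # a lower temperature further limits topic and tone drift
    )
    draft = response["choices"][0]["message"]["content"]
    # Per the "human in the loop" guideline, treat this as a draft for human
    # review rather than text to publish automatically.
    return draft
```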

Adobe, for its part, has been vocally marketing its own AI image generator, Firefly, touting that it was trained only on public domain, licensed, or Adobe-owned images, thereby mitigating the risk that Firefly will produce copyright-infringing output. To make that point more emphatically, Adobe is now promising, according to Fast Company magazine, to indemnify Firefly users against any third-party copyright infringement claims—offer good for enterprise customers only, however.

Not to be outdone, Microsoft on June 8 announced on its blog "three AI Customer Commitments to assist our customers on their responsible AI journey." Intriguing but not yet well-defined is its second commitment, the creation of an "AI Assurance Program" to help customers confirm that the Microsoft AI applications they deploy in-house "meet the legal and regulatory requirements for responsible AI." The blog post suggests one main component of this second commitment will have Microsoft engage in public policy advocacy for appropriately scoped AI regulations.

Section 230 enters the picture. In broad strokes, Section 230 of the Communications Decency Act protects online platforms from being held liable in litigation for "information provided by another"—user-generated comments, reviews, and uploaded videos are quintessential examples. One issue increasingly debated is whether this Section 230 safe harbor should extend to generative AI tools. The answer is complex and implicates the specific architecture of the AI service at issue. Our bet: an app-level service that merely passes synthetic content created by a third party's LLM back end through to the end user is more likely to succeed in invoking the safe harbor, at least as the statute is currently written.

And now we have our first test case! On June 5, a radio host sued OpenAI in a Georgia state court for defamation, alleging that ChatGPT falsely stated to a reporter that the host had embezzled from a nonprofit. We're eager to see whether OpenAI in its response (not yet filed) invokes Section 230 or whether it concedes that the safe harbor does not apply to its own conduct in running its generative AI models, as CEO Sam Altman implied in his Senate testimony last month.

On the lighter side. Is ChatGPT funny? Like a clown? Does it amuse you? In a preprint paper released on June 7, two German researchers suggested that the AI tool as comedian would soon wear out its welcome: Over 90% of 1,008 jokes the testers generated were the same 25 jokes (mostly dad ones). This perhaps raises a deep sociological question about the extent to which widespread use of LLMs might translate into growing homogeneity of culture, but let's table that debate in favor of a query about tomatoes, number two on the researchers' joke list: Why did the tomato turn red? Because it saw the salad dressing.

Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.